Docker for Java Developers

1. Preface

Containers are enabling developers to package their applications (and underlying dependencies) in new ways that are portable and work consistently everywhere: on your machine, in production, in your data center, and in the cloud. And Docker has become the de facto standard for those portable containers in the cloud.

Docker is the developer-friendly Linux container technology that enables you to create your complete stack: OS, JVM, app server, app, and all your custom configuration. So with all it offers, how comfortable are you and your team taking Docker from development to production? Are you hearing developers say, “But it works on my machine!” when code breaks in production?

This lab offers developers an intro-level, hands-on session with Docker: from installation, to exploring Docker Hub, to crafting their own images, to adding Java apps and running custom containers. It also explains how to use Swarm to orchestrate these containers. This is a BYOL (bring your own laptop) session, so bring your Windows, OS X, or Linux laptop and be ready to dig into a tool that promises to be at the forefront of our industry for some time to come.

Note
The latest content of this lab is always available at https://github.com/redhat-developer/docker-java

2. Setup Environments

This section describes what, how, and where to install the software needed for this lab. This lab is designed as a BYOL (Bring Your Own Laptop) hands-on lab.

2.1. Hardware and Operating System

  1. Operating System: Mac OS X (10.8 or later), Windows 7 (SP1), Fedora (21 or later)

  2. Memory: At least 4 GB, 8 GB preferred

2.2. Environment

  1. Edit /etc/resolv.conf (Mac OS or Linux) and set the name server entry to:

TODO - Add instructions for Windows


nameserver <INSTRUCTOR IP ADDRESS>

2.3. Software

  1. Java: Oracle JDK 8u45

  2. Web Browser

2.5. Maven

  1. Download Apache Maven from http://classroom.example.com:8082/downloads/apache-maven-3.3.3-bin.zip.

  2. Unzip to a directory of your choice and add it to the PATH.

2.6. VirtualBox

Docker currently runs natively only on Linux. It can be configured to run in a virtual machine on Mac or Windows, which is why VirtualBox is a requirement for those platforms.

Warning

Linux Users

  1. Keep your kernel up to date

  2. Install the GNU compiler, build tools, and header files for your current Linux kernel

  3. Create a /usr/src/linux link to the current kernel source

2.7. Vagrant

Download Vagrant from http://classroom.example.com:8082/downloads/vagrant/ and install.

2.8. Docker Machine

Docker Machine makes it really easy to create Docker hosts on your computer, on cloud providers and inside your own data center. It creates servers, installs Docker on them, then configures the Docker client to talk to them.

# Mac
sudo curl -L http://classroom.example.com:8082/downloads/docker/docker-machine_darwin-amd64 -o /usr/local/bin/docker-machine
chmod +x /usr/local/bin/docker-machine

# Linux (Manual install)
sudo curl -L  http://classroom.example.com:8082/downloads/docker/docker-machine_linux-amd64 -o /usr/local/bin/docker-machine
chmod +x /usr/local/bin/docker-machine

# Windows
curl -L http://classroom.example.com:8082/downloads/docker/docker-machine.exe -o docker-machine.exe

2.9. Create Lab Docker Host

  1. Create Docker Host to be used in the lab:

    docker-machine create --driver=virtualbox --virtualbox-boot2docker-url=http://classroom.example.com:8082/downloads/boot2docker.iso --engine-insecure-registry=classroom.example.com:5000 lab
    eval "$(docker-machine env lab)"

    Use the following command on Windows:

    docker-machine env lab --shell cmd

    And then execute all the set commands shown in the output.

  2. To make it easier to start/stop the containers, an entry is added into the host mapping table of your operating system. Find out the IP address of your machine:

    docker-machine ip lab

    This will provide the IP address associated with the Docker Machine created earlier.

  3. Edit /etc/hosts (Mac OS or Linux) or C:\Windows\System32\drivers\etc\hosts (Windows) and add:

    <IP ADDRESS>  dockerhost

2.10. Docker Client

Docker Client is used to communicate with Docker Host.

# Mac
sudo curl -L http://classroom.example.com:8082/downloads/docker/docker-latest-mac -o /usr/local/bin/docker
chmod +x /usr/local/bin/docker

# Linux
sudo curl -L http://classroom.example.com:8082/downloads/docker/docker-latest-linux -o /usr/local/bin/docker
chmod +x /usr/local/bin/docker

# Windows
curl -L http://classroom.example.com:8082/downloads/docker/docker-1.8.2.exe -o docker.exe

2.11. WildFly

  1. Download WildFly 9.0.1 from http://classroom.example.com:8082/downloads/wildfly-9.0.1.Final.zip.

  2. Install it by extracting the archive.

2.12. JBoss Developer Studio 9.0.0.GA

To install JBoss Developer Studio stand-alone, complete the following steps:

  1. Download JBDS 9.0.0.GA.

  2. Start the installer as:

    java -jar <JAR FILE NAME>

    Follow the on-screen instructions to complete the installation process.

2.12.1. Create Docker Swarm Cluster

Create Docker Swarm cluster as:

docker run classroom.example.com:5000/swarm create

This will generate a token. Use this token to create a Swarm Master.

docker-machine create -d virtualbox --virtualbox-boot2docker-url=http://classroom.example.com:8082/downloads/boot2docker.iso --engine-insecure-registry=classroom.example.com:5000 --swarm --swarm-master --swarm-discovery token://<token> swarm-master

Detailed explanation for this is available in Java EE Application on Docker Swarm Cluster.

3. Docker Basics

PURPOSE: This chapter introduces the basic terminology of Docker.

Docker is a platform for developers and sysadmins to develop, ship, and run applications. Docker lets you quickly assemble applications from components and eliminates the friction that can come when shipping code. Docker lets you get your code tested and deployed into production as fast as possible.
— docs.docker.com/

Docker simplifies software delivery by making it easy to build and share images that contain your application’s entire environment, or application operating system.

What is meant by an application operating system?

Your application typically requires a specific version of the operating system, application server, JDK, and database server; it may require tuning the configuration files, and it may have multiple other dependencies. The application may also need binding to specific ports and a certain amount of memory. The components and configuration together required to run your application are what is referred to as the application operating system.

You can certainly provide an installation script that will download and install these components. Docker simplifies this process by allowing you to create an image that contains your application and infrastructure together, managed as one component. These images are then used to create Docker containers, which run on the container virtualization platform provided by Docker.

Main Components of Docker

Docker has three main components:

  1. Images are the build component of Docker: a read-only template of the application operating system.

  2. Containers are the run component of Docker and are created from images. Containers can be run, started, stopped, moved, and deleted.

  3. Images are stored, shared, and managed in a registry, the distribution component of Docker. The publicly available registry is known as Docker Hub (available at http://hub.docker.com).

In order for these three components to work together, there is the Docker Daemon, which runs on a host machine and does the heavy lifting of building, running, and distributing Docker containers. In addition, there is the Client, a Docker binary that accepts commands from the user and communicates back and forth with the Daemon.

docker architecture
Figure 1. Docker architecture

The Client communicates with the Daemon, which may be co-located on the same host or on a different host. It requests the Daemon to pull an image from the repository using the pull command. The Daemon then downloads the image from Docker Hub, or whatever registry is configured. Multiple images can be downloaded from the registry and installed on the Daemon host. Images are run using the run command to create containers on demand.
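
For example, assuming access to a configured registry (here, the public Docker Hub), this pull/run workflow looks like:

# Ask the Daemon to pull an image from the configured registry
docker pull ubuntu

# Create a container on demand from the downloaded image
docker run ubuntu echo "Hello from a container"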

How does a Docker Image work?

We’ve already seen that Docker images are read-only templates from which Docker containers are launched. Each image consists of a series of layers. Docker makes use of union file systems to combine these layers into a single image. Union file systems allow files and directories of separate file systems, known as branches, to be transparently overlaid, forming a single coherent file system.

One of the reasons Docker is so lightweight is because of these layers. When you change a Docker image—for example, update an application to a new version— a new layer gets built. Thus, rather than replacing the whole image or entirely rebuilding, as you may do with a virtual machine, only that layer is added or updated. Now you don’t need to distribute a whole new image, just the update, making distributing Docker images faster and simpler.
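
Once you have pulled an image (as you will in a later chapter), you can inspect its layers with the docker history command; each line of its output corresponds to one layer:

docker history classroom.example.com:5000/wildfly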

Every image starts from a base image, for example ubuntu, a base Ubuntu image, or fedora, a base Fedora image. You can also use images of your own as the basis for a new image, for example if you have a base Apache image you could use this as the base of all your web application images.

Note
By default, Docker obtains these base images from Docker Hub.

Docker images are then built from these base images using a simple, descriptive set of steps we call instructions. Each instruction creates a new layer in our image. Instructions include actions like:

  1. Run a command

  2. Add a file or directory

  3. Create an environment variable

  4. Run a process when launching a container

These instructions are stored in a file called a Dockerfile. Docker reads this Dockerfile when you request a build of an image, executes the instructions, and returns a final image.
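
As an illustration only (this Dockerfile is not part of the lab, and myapp.war is a hypothetical artifact), a Dockerfile exercising each of these instruction types might look like:

# Every image starts from a base image
FROM jboss/wildfly

# Run a command
RUN mkdir -p /opt/myapp

# Add a file or directory (myapp.war is a hypothetical artifact)
ADD myapp.war /opt/jboss/wildfly/standalone/deployments/

# Create an environment variable
ENV MYAPP_ENV production

# Run a process when launching a container
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0"]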

How does a Container work?

A container consists of an operating system, user-added files, and meta-data. As we’ve seen, each container is built from an image. That image tells Docker what the container holds, what process to run when the container is launched, and a variety of other configuration data. The Docker image is read-only. When Docker runs a container from an image, it adds a read-write layer on top of the image (using a union file system as we saw earlier) in which your application can then run.
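
You can observe this read-write layer directly: the docker diff command lists the files a running container has added (A), changed (C), or deleted (D) relative to its image:

docker diff <CONTAINER ID>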

3.1. Docker Machine

Machine makes it really easy to create Docker hosts on your computer, on cloud providers and inside your own data center. It creates servers, installs Docker on them, then configures the Docker client to talk to them.

Once your Docker host has been created, it then has a number of commands for managing containers:

  1. Start, stop, restart container

  2. Upgrade Docker

  3. Configure the Docker client to talk to a host

You used Docker Machine already during the attendee setup. We won’t need it much more from here on, but if you need to create hosts, it’s a very handy tool to know about. From now on we’re mostly going to use the docker client.

Find out more about the details at the Docker Machine Website.

Check if docker machine is working:

docker-machine -v

It shows the output similar to the one shown below:

docker-machine version 0.4.1 (e2c88d6)
Note
The exact version may differ based upon how recently the installation was performed.

3.2. Docker Client

The client communicates with the daemon process on your host and lets you work with images and containers.

Check if your client is working using the following command:

docker -v

It shows the output similar to the following:

Docker version 1.8.2, build 0a8c2e3
Note
The exact version may differ based upon how recently the installation was performed.

The most important options you’ll be using frequently are:

  1. run - runs a container

  2. ps - lists containers

  3. stop - stops a container

  4. rm - removes a container

Get a full list of available commands with:

docker

A more comprehensive list of commands is also available in Common Docker Commands.

3.3. Verify Docker Configuration

Check if your Docker Host is running:

docker-machine ls

You should see the output similar to:

NAME        ACTIVE   DRIVER       STATE     URL                         SWARM
lab                  virtualbox   Running   tcp://192.168.99.101:2376

This machine is shown in the “Running” state. If the machine is stopped, start it with:

docker-machine start lab

After it is started, you can find out the IP address of your Docker Host with:

docker-machine ip lab

We already did this during setup, remember? So this is a good chance to check whether you already added this IP to your hosts file.

Type:

ping dockerhost

and see if this resolves to the IP address that the docker-machine command printed out. You should see an output as:

> ping dockerhost
PING dockerhost (192.168.99.101): 56 data bytes
64 bytes from 192.168.99.101: icmp_seq=0 ttl=64 time=0.394 ms
64 bytes from 192.168.99.101: icmp_seq=1 ttl=64 time=0.387 ms

If it does, you’re ready to start with the lab.

4. Run Container

The first step in running any application on Docker is to run a container from an image. There are plenty of images available from the official Docker registry (aka Docker Hub). To run any of them, you just have to ask the Docker Client to run it. The client checks if the image already exists on the Docker Host. If it exists, it runs it; otherwise the host downloads the image first and then runs it.

4.1. Pull Image

Let’s first check if any images are available:

docker images

At first, this list is empty. Now, let’s get a vanilla jboss/wildfly image:

docker pull classroom.example.com:5000/wildfly:latest

By default, Docker images are retrieved from Docker Hub; here, the image comes from the classroom registry instead.

You can see that Docker downloads the image with its different layers.

Note

In a traditional Linux boot, the Kernel first mounts the root File System as read-only, checks its integrity, and then switches the whole rootfs volume to read-write mode. When Docker mounts the rootfs, it starts read-only, as in a traditional Linux boot, but then, instead of changing the file system to read-write mode, it takes advantage of a union mount to add a read-write file system over the read-only file system. In fact there may be multiple read-only file systems stacked on top of each other. Consider each one of these file systems as a layer.

At first, the top read-write layer has nothing in it, but any time a process creates a file, this happens in the top layer. And if something needs to update an existing file in a lower layer, then the file gets copied to the upper layer and changes go into the copy. The version of the file on the lower layer cannot be seen by the applications anymore, but it is there, unchanged.

We call the union of the read-write layer and all the read-only layers a union file system.

plain wildfly0
Figure 2. Docker Layers

In our particular case, the jboss/wildfly image extends the jboss/base-jdk:8 image which adds the OpenJDK distribution on top of the jboss/base image. The base image is used for all JBoss community images. It provides a base layer that includes:

  1. A jboss user (uid/gid 1000) with home directory set to /opt/jboss

  2. A few tools that may be useful when extending the image or installing software, like unzip.

The “jboss/base-jdk:8” image adds:

  1. OpenJDK 8 distribution

  2. A JAVA_HOME environment variable

When the download is done, you can list the images again and will see the following:

docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
classroom.example.com:5000/wildfly   latest              7688aaf382ab        5 weeks ago         581.4 MB

4.2. Run Container

4.2.1. Interactive Container

Run WildFly container in an interactive mode.

docker run -it classroom.example.com:5000/wildfly

This will show the output as:

=========================================================================

  JBoss Bootstrap Environment

  JBOSS_HOME: /opt/jboss/wildfly

  JAVA: /usr/lib/jvm/java/bin/java

  JAVA_OPTS:  -server -XX:+UseCompressedOops  -server -XX:+UseCompressedOops -Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true

=========================================================================

OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
00:44:43,895 INFO  [org.jboss.modules] (main) JBoss Modules version 1.4.3.Final
00:44:44,184 INFO  [org.jboss.msc] (main) JBoss MSC version 1.2.6.Final
00:44:44,267 INFO  [org.jboss.as] (MSC service thread 1-2) WFLYSRV0049: WildFly Full 9.0.0.Final (WildFly Core 1.0.0.Final) starting

. . .

00:46:54,241 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0060: Http management interface listening on http://127.0.0.1:9990/management
00:46:54,243 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0051: Admin console listening on http://127.0.0.1:9990
00:46:54,250 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: WildFly Full 9.0.0.Final (WildFly Core 1.0.0.Final) started in 4256ms - Started 203 of 379 services (210 services are lazy, passive or on-demand)

This shows that the server started correctly, congratulations!

By default, Docker runs the container in the foreground. -i allows you to interact with STDIN and -t attaches a TTY to the process. The switches can be combined as -it.
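
These switches also let you override the image’s default command, for example to open a shell inside the container and look around (illustrative only, not required for the lab):

docker run -it classroom.example.com:5000/wildfly /bin/bash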

Hit Ctrl+C to stop the container.

4.2.2. Detached Container

Restart the container in detached mode:

docker run -d classroom.example.com:5000/wildfly
972f51cc8422eec0a7ea9a804a55a2827b5537c00a6bfd45f8646cb764bc002a

-d, instead of -it, runs the container in detached mode.

The output is the unique id assigned to the container. Check the logs as:

> docker logs 972f51cc8422eec0a7ea9a804a55a2827b5537c00a6bfd45f8646cb764bc002a
=========================================================================

  JBoss Bootstrap Environment

  JBOSS_HOME: /opt/jboss/wildfly

. . .

We can check it by issuing the docker ps command, which lists the running containers and the ports used by each:

> docker ps
CONTAINER ID        IMAGE                                    COMMAND                CREATED             STATUS              PORTS                    NAMES
922abbb9c63a        classroom.example.com:5000/wildfly       "/opt/jboss/wildfly/   3 seconds ago       Up 2 seconds        8080/tcp            desperate_lovelace

Also try docker ps -a to see all the containers on this machine.

4.3. Run Container with Default Port

The startup log shows that the server is installed in /opt/jboss/wildfly. It also shows that the public interfaces are bound to the 0.0.0.0 address while the admin interfaces are bound only to localhost. This information will be useful when learning how to customize the server.

docker-machine ip <machine-name> gives us the Docker Host IP address, and this was already added to the hosts file. So we can try accessing http://dockerhost:8080. However, this will not work: no container port has been mapped to a host port yet.

If you want containers to accept incoming connections, you need to provide special options when invoking docker run. The container we just started can’t be accessed by our browser, so we need to stop it and restart it with different options.

docker stop `docker ps | grep wildfly | awk '{print $1}'`

Restart the container as:

docker run -d -P classroom.example.com:5000/wildfly

-P maps any exposed ports inside the image to a random port on the Docker host. This can be verified as:

> docker ps
CONTAINER ID        IMAGE                                    COMMAND                CREATED             STATUS              PORTS                     NAMES
63a69bff9c69        classroom.example.com:5000/wildfly       "/opt/jboss/wildfly/   14 seconds ago      Up 13 seconds       0.0.0.0:32768->8080/tcp   kickass_bohr

The port mapping is shown in the PORTS column. Access the WildFly server at http://dockerhost:32768. Make sure to use the correct port number as shown in your case.

Note
Exact port number may be different in your case.

4.4. Run Container with Specified Port

Let’s stop the previously running container as:

docker stop `docker ps | grep wildfly | awk '{print $1}'`

Restart the container as:

docker run -it -p 8080:8080 classroom.example.com:5000/wildfly

The format is -p hostPort:containerPort. This option maps a container port to a fixed port on the host, making it accessible from your browser and from anything else that can reach the host.

Note
Docker Port Mapping

Port exposure and mapping are the keys to successful work with Docker. See more about networking on the Docker website Advanced Networking

Now we’re ready to test http://dockerhost:8080 again. This works with the exposed port, as expected.

plain wildfly1
Figure 3. Welcome WildFly

4.5. Enabling WildFly Administration

The default WildFly image exposes only port 8080 and thus is not available for administration using either the CLI or the Admin Console. Let’s expose the management port in different ways.

4.5.1. Default Port Mapping

The following command overrides the default command in the Dockerfile, starts WildFly, and binds the application and management ports to all network interfaces.

docker run -P -d classroom.example.com:5000/wildfly /opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0

Accessing the WildFly Administration Console requires a user in the administration realm. A pre-created image, with appropriate username/password credentials, is used to start WildFly as:

docker run -P -d classroom.example.com:5000/wildfly-management

-P maps any exposed ports inside the image to a random port on the Docker host.

Look at the exposed ports as:

docker ps
CONTAINER ID        IMAGE                                           COMMAND                CREATED             STATUS              PORTS                                              NAMES
af7d6914a1f9        classroom.example.com:5000/wildfly-management   "/opt/jboss/wildfly/   2 seconds ago       Up 1 seconds        0.0.0.0:32770->8080/tcp, 0.0.0.0:32769->9990/tcp   happy_bardeen

Look for the host port that is mapped in the container, 32769 in this case. Access the admin console at http://dockerhost:32769.

Note
Exact port number may be different in your case.

The username/password credentials are:

Field      Value
Username   admin
Password   docker#admin

This shows the admin console as:

wildfly admin console
Figure 4. Welcome WildFly
Additional Ways To Find Port Mapping

The exact mapped port can also be found as:

  1. Using docker port:

    docker port 6f610b310a46

    to see the output as:

    0.0.0.0:32769->8080/tcp
    0.0.0.0:32770->9990/tcp
  2. Using docker inspect:

    docker inspect --format='{{(index (index .NetworkSettings.Ports "9990/tcp") 0).HostPort}}' <CONTAINER ID>

4.5.2. Fixed Port Mapping

This management image can also be started with a pre-defined port mapping as:

docker run -p 8080:8080 -p 9990:9990 -d classroom.example.com:5000/wildfly-management

In this case, Docker port mapping will be shown as:

8080/tcp -> 0.0.0.0:8080
9990/tcp -> 0.0.0.0:9990

4.6. Stop and Remove Container

4.6.1. Stop Container

  1. Stop a specific container:

    docker stop <CONTAINER ID>
  2. Stop all the running containers

    docker stop $(docker ps -aq)
  3. List only the exited containers (exited containers are already stopped):

    docker ps -a -f "status=exited"

4.6.2. Remove Container

  1. Remove a specific container:

    docker rm 0bc123a8ece0
  2. Remove containers matching a regular expression:

    docker ps -a | grep wildfly | awk '{print $1}' | xargs docker rm
  3. Remove all containers, without any criteria

    docker rm $(docker ps -aq)

5. Deploy Java EE 7 Application (Pre-Built WAR)

Java EE 7 Movieplex is a standard multi-tier enterprise application that shows design patterns and anti-patterns for a typical Java EE 7 application.

javaee7 hol
Figure 5. Java EE 7 Application Architecture

Pull the Docker image that contains WildFly and pre-built Java EE 7 application WAR file as shown:

docker pull classroom.example.com:5000/javaee7-hol

The javaee7-hol Dockerfile is based on jboss/wildfly and adds the movieplex7 application as a WAR file.

Run it:

docker run -it -p 8080:8080 classroom.example.com:5000/javaee7-hol

See the application in action at http://dockerhost:8080/movieplex7/. The output is shown:

javaee7 movieplex7
Figure 6. Java EE 7 Application Output

This uses an in-memory database with the WildFly application server, as shown in the image:

javaee7 hol in memory database
Figure 7. In-memory Database

Only two changes are required to the standard jboss/wildfly image:

  1. By default, WildFly starts with the Web profile. This Java EE 7 application uses some capabilities from the Full Platform, so WildFly is started in that mode instead as:

    CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-c", "standalone-full.xml", "-b", "0.0.0.0"]
  2. The WAR file is downloaded to the standalone/deployments directory as:

    RUN curl -L https://github.com/javaee-samples/javaee7-hol/raw/master/solution/movieplex7-1.0-SNAPSHOT.war -o /opt/jboss/wildfly/standalone/deployments/movieplex7-1.0-SNAPSHOT.war
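
Putting these together, the image’s Dockerfile is essentially the following sketch, based on the jboss/wildfly image mentioned above:

FROM jboss/wildfly

# Download the pre-built WAR into the auto-deploy directory
RUN curl -L https://github.com/javaee-samples/javaee7-hol/raw/master/solution/movieplex7-1.0-SNAPSHOT.war -o /opt/jboss/wildfly/standalone/deployments/movieplex7-1.0-SNAPSHOT.war

# Start WildFly with the Full Platform profile, bound to all interfaces
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-c", "standalone-full.xml", "-b", "0.0.0.0"]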

6. Deploy Java EE 7 Application (Container Linking)

Deploy Java EE 7 Application (Pre-Built WAR) explained how to use an in-memory database with the application server. This gets you started quickly but soon becomes a limitation because the database is only in-memory: any changes made to your schema and data are lost when the application server shuts down. In this case, you need to use a database server that resides outside the application server, for example MySQL as the database server and WildFly as the application server.

javaee7 hol container linking
Figure 8. Two Containers On Same Docker Host

This section will show how Docker Container Linking can be used to connect to a service running inside a Docker container via a network port.

  1. Start MySQL server as:

    docker run --name mysqldb -e MYSQL_USER=mysql -e MYSQL_PASSWORD=mysql -e MYSQL_DATABASE=sample -e MYSQL_ROOT_PASSWORD=supersecret -p 3306:3306 -d mysql

    -e defines environment variables that are read by the database at startup and allow us to access the database with this user and password.

  2. Start WildFly and deploy Java EE 7 application as:

    docker run -it --name mywildfly --link mysqldb:db -p 8080:8080 arungupta/wildfly-mysql-javaee7

    --link takes two parameters: the first is the name of the container we’re linking to, and the second is the alias for the link.

    Note
    Container Linking

    Creating a link between two containers creates a conduit between a source container and a target container and securely transfers information about the source container to the target container.

    In our case, the target container (WildFly) can see information about the source container (MySQL). When containers are linked, information about the source container is sent to the recipient container, allowing the recipient to see selected data describing aspects of the source container. For example, the IP address of the MySQL server is exposed as $DB_PORT_3306_TCP_ADDR and the port of the MySQL server is exposed as $DB_PORT_3306_TCP_PORT. These are then used to create the JDBC resource (see the example after this list).

    See more about container communication on the Docker website Linking Containers Together

  3. See the output as:

    > curl http://dockerhost:8080/employees/resources/employees
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?><collection><employee><id>1</id><name>Penny</name></employee><employee><id>2</id><name>Sheldon</name></employee><employee><id>3</id><name>Amy</name></employee><employee><id>4</id><name>Leonard</name></employee><employee><id>5</id><name>Bernadette</name></employee><employee><id>6</id><name>Raj</name></employee><employee><id>7</id><name>Howard</name></employee><employee><id>8</id><name>Priya</name></employee></collection>
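
You can inspect these link-injected environment variables yourself by running env inside the WildFly container; the DB_ prefix comes from the db link alias, and the output should include variables such as DB_PORT_3306_TCP_ADDR and DB_PORT_3306_TCP_PORT:

docker exec mywildfly env | grep DB_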

7. Build and Deploy Java EE 7 Application

Java EE 7 Simple Sample is a trivial Java EE 7 sample application.

7.1. Build Application

  1. Clone the repo:

    git clone https://github.com/javaee-samples/javaee7-simple-sample.git
  2. Build the application:

    mvn clean package

7.2. Start Application Server

Start WildFly server as:

docker run --name wildfly -d -p 8080:8080 -v /Users/youruser/tmp/deployments:/opt/jboss/wildfly/standalone/deployments/:rw jboss/wildfly

Make sure to replace /Users/youruser/tmp/deployments with a directory on your local machine. Also, make sure this directory already exists. For example, on my machine this directory is /Users/arungupta/tmp/deployments.

This command starts a container named “wildfly”.

The -v flag maps a directory from the host into the container. This is the directory where deployments will be placed. rw ensures that the Docker container can write to it.
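
Because this directory is watched by WildFly’s deployment scanner, you could also deploy from the command line simply by copying a WAR into it, for example (using the WAR built in the previous step; adjust the path and file name to your machine):

cp target/javaee7-simple-sample-1.10.war /Users/youruser/tmp/deployments/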

Warning
Windows users, please make sure to use -v /c/Users/ notation for drive letters.

Check logs to verify if the server has started.

docker logs -f wildfly

Access http://dockerhost:8080 in your browser to make sure the instance is up and running.

Now you’re ready to deploy the application for the first time.

7.3. Configure JBoss Developer Studio

Start JBoss Developer Studio, if not already started.

  1. Select ‘Servers’ tab, create a new server adapter

    jbds1
    Figure 9. Server adapter
  2. Assign an existing or create a new WildFly 9.0.0 runtime (changed properties are highlighted.)

    jbds2
    Figure 10. WildFly Runtime Properties
  3. If a new runtime needs to be created, pick the directory for WildFly 9.0.0:

    jbds3
    Figure 11. WildFly 9.0.0.Final Runtime

    Click on ‘Finish’.

  4. Double-click on the newly selected server to configure server properties:

    jbds4
    Figure 12. Server properties

    The host name is set to ‘dockerhost’. The two properties on the left are automatically propagated from the previous dialog. The two additional properties on the right side are required to disable keeping the deployment scanners in sync with the server.

  5. Specify a custom deployment folder on Deployment tab of Server Editor

    jbds5
    Figure 13. Custom deployment folder
  6. Right-click on the newly created server adapter and click ‘Start’.

    jbds6
    Figure 14. Started server

7.4. Deploy Application Using Shared Volumes

  1. Import javaee7-simple-sample application source code using Import → Existing Maven Projects.

  2. Right-click on the project, select ‘Run on Server’ and choose the previously created server.

The project runs and displays the start page of the application.

jbds7
Figure 15. Start Server

Congratulations!

You’ve deployed your first application to WildFly running in a Docker container from JBoss Developer Studio.

Stop WildFly container when you’re done.

docker stop wildfly

7.5. Deploy Application Using CLI

The Command Line Interface (CLI) is a tool for connecting to WildFly instances and managing all tasks from a command-line environment. Some of the tasks that you can do using the CLI are:

  1. Deploy/undeploy web applications in Standalone/Domain mode.

  2. View runtime information about deployed applications.

  3. Start/stop/restart nodes in the respective mode, i.e. Standalone/Domain.

  4. Add/delete resources or subsystems on servers.

Let’s use the CLI to deploy javaee7-simple-sample to WildFly running in the container.

  1. The CLI needs to be locally installed and comes as part of WildFly. It should be available in the previously downloaded WildFly. Unzip it into a folder of your choice (e.g. /Users/arungupta/tools/). This will create a wildfly-9.0.0.Final directory there. This folder is referred to as $WILDFLY_HOME from here on. Make sure to add /Users/arungupta/tools/wildfly-9.0.0.Final/bin to your $PATH.

  2. Run the “wildfly-management” image with fixed port mapping as explained in Fixed Port Mapping.

  3. Run the jboss-cli command and connect to the WildFly instance.

    jboss-cli.sh --controller=dockerhost:9990  -u=admin -p=docker#admin -c

    This will show the output as:

    [standalone@dockerhost:9990 /]
  4. Deploy the application as:

    deploy <javaee7-simple-sample PATH>/target/javaee7-simple-sample-1.10.war --force
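
    You can verify the deployment from the same CLI session by listing all deployments with the standard -l option of the deploy command:

    deploy -l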

Now you’ve successfully used the CLI to remotely deploy the Java EE 7 sample application to WildFly running as a Docker container.

7.6. Deploy Application Using Web Console

WildFly comes with a web-based administration console. It relies on the same management APIs that are used by JBoss Developer Tools and the CLI, and provides a simple, easy-to-use way to manage a WildFly instance. For a Docker image, it needs to be explicitly enabled as explained in Enabling WildFly Administration. Once enabled, it can be accessed at http://dockerhost:9990.

console1
Figure 16. WildFly Web Console

Username and password credentials are shown in Enabling WildFly Administration.

Note

You may want to stop and remove the Docker container running WildFly. This can be done as docker ps -a | grep wildfly | awk '{print $1}' | xargs docker rm -f.

Start a new container as docker run -d --name wildfly -p 8080:8080 -p 9990:9990 arungupta/wildfly-management.

Deploy the application using the console with the following steps:

  1. Go to ‘Deployments’ tab.

    wildfly9 deployments tab
    Figure 17. Deployments tab in WildFly Web Console
  2. Click on ‘Add’ button.

  3. On ‘Add Deployment’ screen, take the default of ‘Upload a new deployment’ and click ‘Next>>’.

  4. Click on ‘Choose File’, select <javaee7-simple-sample PATH>/javaee7-simple-sample.war file on your computer. This would be javaee7-simple-sample/target/javaee7-simple-sample.war from Build Application.

  5. Click on ‘Next>>’.

  6. Select ‘Enable’ checkbox.

    wildfly9 add deployments
    Figure 18. Enable a deployment
  7. Click ‘Finish’.

    wildfly9 javaee7 simple sample deployed
    Figure 19. Java EE 7 Simple Sample Deployed

This completes the deployment of the Java EE 7 application using the Web Console. The output can be seen at http://dockerhost:8080/javaee7-simple-sample and looks like:

wildfly9 javaee7 simple sample output
Figure 20. Java EE 7 Simple Sample Output

7.7. Deploy Application Using Management API

A standalone WildFly process can be configured to listen for remote management requests using its “native management interface”. The CLI tool that comes with the application server uses this interface, and users can develop custom clients that use it as well. By default, the WildFly management interface listens on 127.0.0.1. When running inside a Docker container, the network interface should be bound to all publicly assigned addresses. This can be easily changed by binding to 0.0.0.0 instead of 127.0.0.1.
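
As an illustration (a sketch assuming the management image from the next step is running with the credentials from Enabling WildFly Administration), the management API can also be invoked directly over HTTP, for example to read the server state:

curl --digest -u admin:'docker#admin' -H "Content-Type: application/json" -d '{"operation":"read-attribute","name":"server-state"}' http://dockerhost:9990/management

A successful call returns an outcome of success with the current server state.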

  1. Start another WildFly instance again:

    docker run -d --name wildfly -p 8080:8080 -p 9990:9990 arungupta/wildfly-management

    In addition to application port 8080, the administration port 9990 is exposed as well. The WildFly image that is used has tweaked the start script such that the management interface is bound to 0.0.0.0.

  2. Create a new server adapter in JBoss Developer Studio and name it “WildFly 9.0.0-Management”. Specify the host name as ‘dockerhost’.

    jbds8
  3. Click on ‘Next>’ and change the values as shown.

    jbds9
    Figure 21. Create New Server Adapter
  4. Take the default values in ‘Remote System Integration’ and click on ‘Finish’.

  5. Change the server properties by double-clicking on the newly created server adapter. Specify the admin credentials (username: admin, password: docker#admin). Note, you need to delete the existing password and use this instead:

    jbds10
    Figure 22. Management Login Credentials
  6. Right-click on the newly created server adapter and click ‘Start’. Status quickly changes to ‘Started’ as shown.

    jbds11
    Figure 23. Synchronized WildFly Server
  7. Right-click on the javaee7-simple-sample project, select ‘Run on Server’ and choose this server. The project runs and displays the start page of the application.

  8. Stop WildFly when you’re done.

    docker stop wildfly

8. Docker Maven Plugin

The Docker Maven plugin allows you to manage Docker images and containers from pom.xml. It comes with predefined goals:

Goal            Description
docker:start    Create and start containers
docker:stop     Stop and destroy containers
docker:build    Build images
docker:push     Push images to a registry
docker:remove   Remove images from local docker host
docker:logs     Show container logs

8.1. Run Java EE Application

  1. Clone the workspace as:

    git clone https://github.com/javaee-samples/javaee7-docker-maven.git
  2. Build the image as:

    mvn package -Pdocker
  3. Verify the image as:

    docker images
    REPOSITORY                        TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
    arungupta/javaee7-docker-maven    latest              2e51b3fca40f        4 seconds ago       581.5 MB
  4. Run the container as:

    mvn install -Pdocker
  5. Access your application at http://dockerhost:8080/javaee7-docker-maven/resources/persons. It shows the output as:

    curl http://dockerhost:8080/javaee7-docker-maven/resources/persons
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?><collection><person><name>Penny</name></person><person><name>Leonard</name></person><person><name>Sheldon</name></person><person><name>Amy</name></person><person><name>Howard</name></person><person><name>Bernadette</name></person><person><name>Raj</name></person><person><name>Priya</name></person></collection>

8.2. Understand Plugin Configuration

pom.xml is updated to include docker-maven-plugin as:

<plugin>
    <groupId>org.jolokia</groupId>
    <artifactId>docker-maven-plugin</artifactId>
    <version>0.11.5</version>
    <configuration>
        <images>
            <image>
                <alias>user</alias>
                <name>arungupta/javaee7-docker-maven</name>
                <build>
                    <from>arungupta/wildfly:8.2</from>
                    <assembly>
                        <descriptor>assembly.xml</descriptor>
                        <basedir>/</basedir>
                    </assembly>
                    <ports>
                        <port>8080</port>
                    </ports>
                </build>
                <run>
                    <ports>
                        <port>8080:8080</port>
                    </ports>
                </run>
            </image>
        </images>
    </configuration>
    <executions>
        <execution>
            <id>docker:build</id>
            <phase>package</phase>
            <goals>
                <goal>build</goal>
            </goals>
        </execution>
        <execution>
            <id>docker:start</id>
            <phase>install</phase>
            <goals>
                <goal>start</goal>
            </goals>
        </execution>
    </executions>
</plugin>

Each image configuration has three parts:

  1. Image name and alias

  2. <build> that defines how the image is created. The base image, build artifacts and their dependencies, ports to be exposed, etc. to be included in the image are specified here. The assembly descriptor format is used to specify the artifacts to be included and is defined in the src/main/docker directory.

    assembly.xml in our case looks like:

    <assembly . . .>
      <id>javaee7-docker-maven</id>
      <dependencySets>
        <dependencySet>
          <includes>
            <include>org.javaee7.sample:javaee7-docker-maven</include>
          </includes>
          <outputDirectory>/opt/jboss/wildfly/standalone/deployments/</outputDirectory>
          <outputFileNameMapping>javaee7-docker-maven.war</outputFileNameMapping>
        </dependencySet>
      </dependencySets>
    </assembly>
  3. <run> that defines how the container is run. Ports that need to be exposed are specified here.

In addition, the package phase is tied to the docker:build goal and the install phase is tied to the docker:start goal.
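
With the docker profile active, these goals can also be invoked directly, independent of the lifecycle bindings (assuming the usual plugin prefix resolution):

mvn docker:build -Pdocker
mvn docker:start -Pdocker
mvn docker:stop -Pdocker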

9. Docker Tools in Eclipse

The Docker tooling is aimed at providing, at minimum, the same basic features as the command-line interface, while also providing some advantages through access to a full-fledged UI.

9.1. Install Docker Tools Plugins

As this is still in an early access stage, you will have to install it first:

  1. Use JBoss Developer Studio 9.0 Beta 2.

    Alternatively, download Eclipse Mars latest build and configure JBoss Tools plugin from the update site http://download.jboss.org/jbosstools/updates/nightly/mars/.

  2. Open JBoss Developer Studio 9.0 Nightly

  3. Add a new site using the menu items: ‘Help’ > ‘Install New Software…​’ > ‘Add…​’.

    Specify the ‘Name:’ as “Docker Nightly” and ‘Location:’ as http://download.eclipse.org/linuxtools/updates-docker-nightly/.

    jbds docker tools1
    Figure 24. Add Docker Tooling To JBoss Developer Studio
  4. Expand Linux Tools, select ‘Docker Client’ and ‘Docker Tooling’.

    jbds docker tools nightly setup
    Figure 25. Add Docker Tooling
  5. Click on ‘Next >’, ‘Next >’, accept the terms of the license agreement, and click on ‘Finish’. This will complete the installation of plugins.

    Restart the IDE for the changes to take effect.

9.2. Docker Explorer

The Docker Explorer provides a wizard to establish a new connection to a Docker daemon. This wizard can detect default settings if the user’s machine runs Docker natively (such as in Linux) or in a VM using Boot2Docker (such as in Mac or Windows). Both Unix sockets on Linux machines and the REST API on other OSes are detected and supported. The wizard also allows remote connections using custom settings.

  1. Use the menu ‘Window’, ‘Show View’, ‘Other…​’. Type ‘docker’ to see the output as:

    jbds docker tools docker view
  2. Select ‘Docker Explorer’ to open Docker Explorer.

    jbds docker tools docker explorer view
  3. Click on the link in this window to create a connection to Docker Host. Specify the settings as shown:

    jbds docker tools2
    Figure 26. Docker Explorer

    Make sure to get IP address of the Docker Host as:

    docker-machine ip lab

    Also, make sure to specify the correct directory for .docker on your machine.

  4. Click on ‘Test Connection’ to check the connection. This should show the output as:

    jbds docker tools test connection output
    Figure 27. Docker Explorer

    Click on ‘OK’ and ‘Finish’ to exit out of the wizard.

  5. Docker Explorer itself is a tree view that handles multiple connections and provides users with a quick overview of the existing images and containers.

    jbds docker tools3
    Figure 28. Docker Explorer Tree View
  6. Customize the view by clicking on the arrow in toolbar:

    jbds docker tools customize view option
    Figure 29. Docker Explorer Customize View

    Built-in filters can show/hide intermediate and ‘dangling’ images, as well as stopped containers.

    jbds docker tools customize view wizard
    Figure 30. Docker Explorer Customize View Wizard

9.3. Docker Images

The Docker Images view lists all images in the Docker host selected in the Docker Explorer view. This view allows the user to manage images, including:

  1. Pull/push images from/to the Docker Hub Registry (other registries will be supported as well, #469306)

  2. Build images from a Dockerfile

  3. Create a container from an image

Let’s take a look at it.

  1. Use the menu ‘Window’, ‘Show View’, ‘Other…​’, select ‘Docker Images’. It shows the list of images on Docker Host:

    jbds docker tools4
    Figure 31. Docker Images View
  2. Right-click on the image ending with “wildfly:latest” and click on the green arrow in the toolbar. This will show the following wizard:

    jbds docker tools run container wizard
    Figure 32. Docker Run Container Wizard

    By default, all exposed ports from the image are mapped to random ports on the host interface. This setting can be changed by unselecting the first checkbox and specifying an exact port mapping.

    Click on ‘Finish’ to start the container.

  3. When the container is started, all logs are streamed into Eclipse Console:

    jbds docker tools5
    Figure 33. Docker Container Logs

9.4. Docker Containers

Docker Containers view lets the user manage the containers. The view toolbar provides commands to start, stop, pause, unpause, display the logs and kill containers.

  1. Use the menu ‘Window’, ‘Show View’, ‘Other…​’, select ‘Docker Containers’. It shows the list of running containers on Docker Host:

    jbds docker tools6
    Figure 34. Docker Containers View
  2. Pause the container by clicking on the “pause” button in the toolbar (#469310). Show the complete list of containers by clicking on the ‘View Menu’, ‘Show all containers’.

    jbds docker tools all containers
    Figure 35. All Docker Containers
  3. Select the paused container, and click on the green arrow in the toolbar to restart the container.

  4. Right-click on any running container and select “Display Log” to view the log for this container.

    jbds docker tools display log
    Figure 36. Eclipse Properties View

TODO: Users can also attach an Eclipse console to a running Docker container to follow the logs and use the STDIN to interact with it.

9.5. Details on Images and Containers

Eclipse Properties view is used to provide more information about the containers and images.

  1. Just open the Properties View and click on a Connection, Container, or Image in any of the Docker Explorer View, Docker Containers View, or Docker Images View. This will fill in data in the Properties view.

    Info view is shown as:

    jbds docker tools properties info
    Figure 37. Docker Container Properties View Info

    Inspect view is shown as:

    jbds docker tools properties inspect
    Figure 38. Docker Container Properties View Inspect

10. Test Java EE Applications on Docker

Testing is a very important aspect of Java EE application development. Especially when it comes to in-container tests, JBoss Arquillian is well known for making this easy for Java EE application servers. Picking up where unit tests leave off, Arquillian handles all the plumbing of container management, deployment, and framework initialization so you can focus on the task at hand: writing your tests.

With Arquillian, you can use the WildFly remote container adapter and connect to any WildFly instance running in a Docker container. But this wouldn’t help with the Docker container lifecycle management.

Arquillian Cube, an extension of Arquillian, allows you to control the lifecycle of Docker images as part of the test lifecycle, either automatically or manually. This extension allows you to start a Docker container with a server installed, deploy the required deployable file within it, and execute Arquillian tests.

The key point here is that if Docker is used as the deployment platform in production, your tests are executed against the same kind of container that will be used in production, so your tests are even more realistic than before.

  1. Check out the workspace:

    git clone http://github.com/javaee-samples/javaee-arquillian-cube
  2. Edit src/test/resources/arquillian.xml file and change the IP address specified in serverUri property value to point to your Docker host’s IP. This can be found out as:

    docker-machine ip lab
  3. Run the tests as:

    mvn test

    This will create a container using the image defined in src/test/resources/wildfly/Dockerfile. The container qualifier in arquillian.xml defines the directory name in src/test/resources directory.

    Note

    A pre-built image can be used by specifying:

    wildfly:
      image: jboss/wildfly

    instead of

    wildfly:
      buildImage:
        dockerfileLocation: src/test/resources/wildfly

    By default, the “cube” profile is activated and this includes all the required dependencies.

    The result is shown as:

    Running org.javaee7.sample.PersonDatabaseTest
    Jun 16, 2015 9:23:04 AM org.jboss.arquillian.container.impl.MapObject populate
    WARNING: Configuration contain properties not supported by the backing object org.jboss.as.arquillian.container.remote.RemoteContainerConfiguration
    Unused property entries: {target=wildfly:8.1.0.Final:remote}
    Supported property names: [managementAddress, password, managementPort, managementProtocol, username]
    Jun 16, 2015 9:23:13 AM org.xnio.Xnio <clinit>
    INFO: XNIO version 3.2.0.Beta4
    Jun 16, 2015 9:23:13 AM org.xnio.nio.NioXnio <clinit>
    INFO: XNIO NIO Implementation Version 3.2.0.Beta4
    Jun 16, 2015 9:23:13 AM org.jboss.remoting3.EndpointImpl <clinit>
    INFO: JBoss Remoting version (unknown)
    Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.406 sec - in org.javaee7.sample.PersonDatabaseTest
    
    Results :
    
    Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
    
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD SUCCESS
    [INFO] ------------------------------------------------------------------------
  4. In arquillian.xml, add the following property:

    <property name="connectionMode">STARTORCONNECT</property>

    This bypasses the create/start Cube commands if a Docker Container with the same name is already running on the target system.

    This allows you to prestart the containers manually during development and just connect to them to avoid the extra cost of starting the Docker Containers for each test run. This assumes you are not changing the actual definition of the Docker Container itself.

11. Multiple Containers Using Docker Compose

Docker Compose is a tool for defining and running complex applications with Docker. With Compose, you define a multi-container application in a single file, then spin your application up in a single command which does everything that needs to be done to get it running.
— github.com/docker/compose

An application using Docker containers will typically consist of multiple containers. With Docker Compose, there is no need to write shell scripts to start your containers. All the containers are defined as services in a configuration file, and then the docker-compose script is used to start, stop, and restart the application and all the services in that application, and all the containers within those services. The complete list of commands is:

Command   Purpose
build     Build or rebuild services
help      Get help on a command
kill      Kill containers
logs      View output from containers
port      Print the public port for a port binding
ps        List containers
pull      Pulls service images
restart   Restart services
rm        Remove stopped containers
run       Run a one-off command
scale     Set number of containers for a service
start     Start services
stop      Stop services
up        Create and start containers

The Docker Compose script is only available for OS X and Linux. https://github.com/arun-gupta/docker-java/issues/3 tracks Docker Compose support on Windows.

11.1. Configuration File

  1. The entry point to Compose is docker-compose.yml. Let’s use the following file:

    mysqldb:
      image: mysql
      environment:
        MYSQL_DATABASE: sample
        MYSQL_USER: mysql
        MYSQL_PASSWORD: mysql
        MYSQL_ROOT_PASSWORD: supersecret
    mywildfly:
      image: arungupta/wildfly-mysql-javaee7
      links:
        - mysqldb:db
      ports:
        - 8080:8080
    1. Two services are defined, named mysqldb and mywildfly

    2. The image name for each service is defined using image

    3. Environment variables for the MySQL container are defined in environment

    4. The MySQL container is linked with the WildFly container using links

    5. Port forwarding is achieved using ports
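
Most docker-compose commands can also target a single service by name, which is handy during development, for example:

docker-compose up -d mysqldb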

11.2. Start Services

  1. All services can be started, in detached mode, by giving the command:

    docker-compose up -d

    And this shows the output as:

    Creating attendees_mysqldb_1...
    Creating attendees_mywildfly_1...

    An alternate compose file name can be specified using -f.

    An alternate project name, used as the prefix for container names (attendees in the output below), can be specified using -p.

  2. Started services can be verified as:

    > docker-compose ps
            Name                       Command               State                Ports
    -------------------------------------------------------------------------------------------------
    attendees_mysqldb_1     /entrypoint.sh mysqld            Up      3306/tcp
    attendees_mywildfly_1   /opt/jboss/wildfly/customi ...   Up      0.0.0.0:8080->8080/tcp, 9990/tcp

    This provides a consolidated view of all the services started, and containers within them.

    Alternatively, the containers in this application, and any additional containers running on this Docker host, can be seen using the usual docker ps command:

    > docker ps
    CONTAINER ID        IMAGE                                    COMMAND                CREATED             STATUS              PORTS                              NAMES
    3598e545bd2f        arungupta/wildfly-mysql-javaee7:latest   "/opt/jboss/wildfly/   59 seconds ago      Up 58 seconds       0.0.0.0:8080->8080/tcp, 9990/tcp   attendees_mywildfly_1
    b8cf6a3d518b        mysql:latest                             "/entrypoint.sh mysq   2 minutes ago       Up 2 minutes        3306/tcp                           attendees_mysqldb_1
  3. Service logs can be seen as:

    > docker-compose logs
    Attaching to attendees_mywildfly_1, attendees_mysqldb_1
    mywildfly_1 | => Starting WildFly server
    mywildfly_1 | => Waiting for the server to boot
    mywildfly_1 | =========================================================================
    mywildfly_1 |
    mywildfly_1 |   JBoss Bootstrap Environment
    mywildfly_1 |
    mywildfly_1 |   JBOSS_HOME: /opt/jboss/wildfly
    mywildfly_1 |
    mywildfly_1 |   JAVA: /usr/lib/jvm/java/bin/java
    mywildfly_1 |
    mywildfly_1 |   JAVA_OPTS:  -server -Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true
    mywildfly_1 |
    
    . . .
    
    mywildfly_1 | 15:40:20,866 INFO  [org.jboss.resteasy.spi.ResteasyDeployment] (MSC service thread 1-2) Deploying javax.ws.rs.core.Application: class org.javaee7.samples.employees.MyApplication
    mywildfly_1 | 15:40:20,914 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-2) JBAS017534: Registered web context: /employees
    mywildfly_1 | 15:40:21,032 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 28) JBAS018559: Deployed "employees.war" (runtime-name : "employees.war")
    mywildfly_1 | 15:40:21,077 INFO  [org.jboss.as] (Controller Boot Thread) JBAS015961: Http management interface listening on http://127.0.0.1:9990/management
    mywildfly_1 | 15:40:21,077 INFO  [org.jboss.as] (Controller Boot Thread) JBAS015951: Admin console listening on http://127.0.0.1:9990
    mywildfly_1 | 15:40:21,077 INFO  [org.jboss.as] (Controller Boot Thread) JBAS015874: WildFly 8.2.0.Final "Tweek" started in 9572ms - Started 280 of 334 services (92 services are lazy, passive or on-demand)
    mysqldb_1   | Running mysql_install_db
    mysqldb_1   | 2015-06-05 15:38:31 0 [Note] /usr/sbin/mysqld (mysqld 5.6.25) starting as process 27 ...
    mysqldb_1   | 2015-06-05 15:38:31 27 [Note] InnoDB: Using atomics to ref count buffer pool pages
    
    . . .
    
    mysqldb_1   | 2015-06-05 15:38:40 1 [Note] Event Scheduler: Loaded 0 events
    mysqldb_1   | 2015-06-05 15:38:40 1 [Note] mysqld: ready for connections.
    mysqldb_1   | Version: '5.6.25'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  MySQL Community Server (GPL)
    mysqldb_1   | 2015-06-05 15:40:18 1 [Warning] IP address '172.17.0.24' could not be resolved: Name or service not known

11.3. Verify Application

  1. Access the application at http://dockerhost:8080/employees/resources/employees/. This is shown in the browser as:

docker compose output
Figure 39. Output From Servers Run Using Docker Compose

11.4. Stop Services

Stop the services as:

docker-compose stop
Stopping attendees_mywildfly_1...
Stopping attendees_mysqldb_1...
Warning

Stopping and starting the containers again will give the following error:

wildfly_1 |
wildfly_1 | 09:11:07,802 ERROR [org.jboss.as.controller.management-operation] (management-handler-thread - 4) JBAS014613: Operation ("add") failed - address: ([
wildfly_1 |     ("subsystem" => "datasources"),
wildfly_1 |     ("jdbc-driver" => "mysql")
wildfly_1 | ]) - failure description: "JBAS014803: Duplicate resource [
wildfly_1 |     (\"subsystem\" => \"datasources\"),
wildfly_1 |     (\"jdbc-driver\" => \"mysql\")
wildfly_1 | ]"

This is expected because the JDBC resource is created during every run of the container. In a real-world application, this would be pre-baked into the configuration already.

11.5. Remove Containers

Remove the containers as:

docker-compose rm -f
Going to remove attendees_mywildfly_1, attendees_mysqldb_1
Removing attendees_mywildfly_1... done
Removing attendees_mysqldb_1... done

12. Java EE Application on Docker Swarm Cluster

Docker Swarm is native clustering for Docker. It allows you to create and access a pool of Docker hosts using the full suite of Docker tools. Because Docker Swarm serves the standard Docker API, any tool that already communicates with a Docker daemon can use Swarm to transparently scale to multiple hosts.

12.1. Key Components of Docker Swarm

docker swarm components
Figure 40. Key Components of Docker Swarm

Swarm Manager: Docker Swarm has a Manager, a pre-defined Docker host that is the single point for all administration. The Swarm manager orchestrates and schedules containers on the entire cluster. Currently only a single manager instance is allowed in the cluster. This is a single point of failure (SPOF) for high-availability architectures; additional managers will be allowed in a future version of Swarm with #598.

Swarm Nodes: The containers are deployed on nodes, which are additional Docker hosts. Each Swarm node must be accessible by the manager, and each node must listen on the same network interface (TCP port). Each node runs a Docker Swarm agent that registers the referenced Docker daemon, monitors it, and updates the discovery backend with the node’s status. The containers run on a node.

Scheduler Strategy: Different scheduler strategies (“binpack”, “spread” (default), and “random”) can be applied to pick the best node to run your container. The default strategy picks the node with the least number of running containers. There are multiple kinds of filters, such as constraints and affinity. Together, strategies and filters allow for a decent scheduling algorithm.
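
For example, a constraint filter can pin a container to a named node. An illustrative example (the image name is hypothetical here):

docker run -d -e constraint:node==swarm-node-01 nginx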

Node Discovery Service: By default, Swarm uses a hosted discovery service, based on Docker Hub, that uses tokens to discover the nodes that are part of a cluster. However, etcd, Consul, and ZooKeeper can also be used for service discovery. This is particularly useful if there is no access to the Internet, or you are running the setup in a closed network. A new discovery backend can be created as explained here. It would be useful to have the hosted discovery service inside the firewall; #660 discusses this.

Standard Docker API: Docker Swarm serves the standard Docker API, and thus any tool that talks to a single Docker host will seamlessly scale to multiple hosts. That means that if you were using shell scripts with the Docker CLI to configure multiple Docker hosts, the same CLI can now talk to the Swarm cluster, and Docker Swarm will act as a proxy and run the commands on the cluster.

There are lots of other concepts but these are the main ones.

12.2. Create a Docker Swarm Cluster

  1. The easiest way to use Swarm is with the official Docker image:

    docker run swarm create

    This command returns a discovery token, referred to as <TOKEN> in this document, which is the unique cluster id. It will be used when creating the master and nodes later. This cluster id is returned by the hosted discovery service on Docker Hub.

    It shows the output as:

    docker run swarm create
    Unable to find image 'swarm:latest' locally
    latest: Pulling from swarm
    55b38848634f: Pull complete
    fd7bc7d11a30: Pull complete
    db039e91413f: Pull complete
    1e5a49ab6458: Pull complete
    5d9ce3cdadc7: Pull complete
    1f26e949f933: Pull complete
    e08948058bed: Already exists
    swarm:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
    Digest: sha256:0e417fe3f7f2c7683599b94852e4308d1f426c82917223fccf4c1c4a4eddb8ef
    Status: Downloaded newer image for swarm:latest
    1d528bf0568099a452fef5c029f39b85

    The last line is the <TOKEN>.
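
    As a convenience, the token can be captured in a shell variable for the later steps:

    TOKEN=$(docker run swarm create)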

    Note
    Make sure to note this cluster id now, as there is no means to list it later. This should be fixed with #661.
  2. Swarm is fully integrated with Docker Machine, which makes it the easiest way to get started. Let’s create a Swarm master next:

    docker-machine create -d virtualbox --swarm --swarm-master --swarm-discovery token://<TOKEN> swarm-master

    Replace <TOKEN> with the cluster id obtained in the previous step.

    --swarm configures the machine with Swarm, and --swarm-master configures the created machine to be the Swarm master. Master creation talks to the hosted service on Docker Hub and registers that a master has been created in the cluster.

  3. Connect to this newly created master and find some more information about it:

    eval "$(docker-machine env swarm-master)"
    docker info
    Note
    If you’re on Windows, run only the docker-machine env swarm-master command and copy the output into an editor. Replace all occurrences of export with SET, remove the quotes and all duplicate occurrences of "/", and then issue the three commands at your command prompt.
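
    For example, the converted commands would look similar to the following (a sketch; the address and path are illustrative):

    SET DOCKER_TLS_VERIFY=1
    SET DOCKER_HOST=tcp://192.168.99.102:2376
    SET DOCKER_CERT_PATH=C:\Users\<user>\.docker\machine\machines\swarm-master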

    This will show the output as:

    > docker info
    Containers: 2
    Images: 7
    Storage Driver: aufs
     Root Dir: /mnt/sda1/var/lib/docker/aufs
     Backing Filesystem: extfs
     Dirs: 11
     Dirperm1 Supported: true
    Execution Driver: native-0.2
    Logging Driver: json-file
    Kernel Version: 4.0.5-boot2docker
    Operating System: Boot2Docker 1.7.0 (TCL 6.3); master : 7960f90 - Thu Jun 18 18:31:45 UTC 2015
    CPUs: 1
    Total Memory: 996.2 MiB
    Name: swarm-master
    ID: DLFR:OQ3E:B5P6:HFFD:VKLI:IOLU:URNG:HML5:UHJF:6JCL:ITFH:DS6J
    Debug mode (server): true
    File Descriptors: 22
    Goroutines: 36
    System Time: 2015-07-11T00:16:34.29965306Z
    EventsListeners: 1
    Init SHA1:
    Init Path: /usr/local/bin/docker
    Docker Root Dir: /mnt/sda1/var/lib/docker
    Username: arungupta
    Registry: https://index.docker.io/v1/
    Labels:
     provider=virtualbox
  4. Create a Swarm node

    docker-machine create -d virtualbox --swarm --swarm-discovery token://<TOKEN> swarm-node-01

    Replace <TOKEN> with the cluster id obtained in an earlier step.

    Node creation talks to the hosted service on Docker Hub and joins the previously created cluster. This is specified by --swarm-discovery token://…​, with the cluster id obtained earlier.

  5. To make it a real cluster, let’s create a second node:

    docker-machine create -d virtualbox --swarm --swarm-discovery token://<TOKEN> swarm-node-02

    Replace <TOKEN> with the cluster id obtained in an earlier step.

  6. List all the nodes created so far:

    docker-machine ls

    This shows the output similar to the one below:

    docker-machine ls
    NAME            ACTIVE   DRIVER       STATE     URL                         SWARM
    lab                      virtualbox   Running   tcp://192.168.99.101:2376
    summit2015               virtualbox   Running   tcp://192.168.99.100:2376
    swarm-master    *        virtualbox   Running   tcp://192.168.99.102:2376   swarm-master (master)
    swarm-node-01            virtualbox   Running   tcp://192.168.99.103:2376   swarm-master
    swarm-node-02            virtualbox   Running   tcp://192.168.99.104:2376   swarm-master

    The machines that are part of the cluster have the cluster’s name in the SWARM column, blank otherwise. For example, “lab” and “summit2015” are standalone machines, whereas all other machines are part of the “swarm-master” cluster. The Swarm master is also identified by (master) in the SWARM column.

  7. Connect to the Swarm cluster and find some information about it:

    eval "$(docker-machine env --swarm swarm-master)"
    docker info

    This shows the output as:

    > docker info
    Containers: 4
    Images: 3
    Role: primary
    Strategy: spread
    Filters: affinity, health, constraint, port, dependency
    Nodes: 3
     swarm-master: 192.168.99.102:2376
      └ Containers: 2
      └ Reserved CPUs: 0 / 1
      └ Reserved Memory: 0 B / 1.022 GiB
      └ Labels: executiondriver=native-0.2, kernelversion=4.0.5-boot2docker, operatingsystem=Boot2Docker 1.7.0 (TCL 6.3); master : 7960f90 - Thu Jun 18 18:31:45 UTC 2015, provider=virtualbox, storagedriver=aufs
     swarm-node-01: 192.168.99.103:2376
      └ Containers: 1
      └ Reserved CPUs: 0 / 1
      └ Reserved Memory: 0 B / 1.022 GiB
      └ Labels: executiondriver=native-0.2, kernelversion=4.0.5-boot2docker, operatingsystem=Boot2Docker 1.7.0 (TCL 6.3); master : 7960f90 - Thu Jun 18 18:31:45 UTC 2015, provider=virtualbox, storagedriver=aufs
     swarm-node-02: 192.168.99.104:2376
      └ Containers: 1
      └ Reserved CPUs: 0 / 1
      └ Reserved Memory: 0 B / 1.022 GiB
      └ Labels: executiondriver=native-0.2, kernelversion=4.0.5-boot2docker, operatingsystem=Boot2Docker 1.7.0 (TCL 6.3); master : 7960f90 - Thu Jun 18 18:31:45 UTC 2015, provider=virtualbox, storagedriver=aufs
    CPUs: 3
    Total Memory: 3.065 GiB

    There are three nodes: one Swarm master and two Swarm nodes. A total of four containers are running in this cluster: a Swarm agent on the master and on each node, plus an additional swarm-agent-master running on the master. This can be verified by connecting to the master and listing all the containers, as shown below.
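
    Connecting to the master host directly (without the --swarm flag) lists only that host’s containers:

    eval "$(docker-machine env swarm-master)"
    docker ps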

  8. List nodes in the cluster with the following command:

    docker run swarm list token://<TOKEN>

    This shows the output as:

    > docker run swarm list token://1d528bf0568099a452fef5c029f39b85
    192.168.99.103:2376
    192.168.99.104:2376
    192.168.99.102:2376

12.3. Deploy Java EE Application to Docker Swarm Cluster

The complete cluster is now in place, and we need to deploy the Java EE application to it.

Swarm takes care of distributing deployments across the nodes. The only thing we need to do is deploy the application, as already explained in Deploy Java EE 7 Application (Container Linking).

  1. Start MySQL server as:

    docker run --name mysqldb -e MYSQL_USER=mysql -e MYSQL_PASSWORD=mysql -e MYSQL_DATABASE=sample -e MYSQL_ROOT_PASSWORD=supersecret -p 3306:3306 -d mysql

    -e defines environment variables that are read by the database at startup and allow us to access the database with this user and password.
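
    The variables can be verified inside the running container. An illustrative example:

    docker exec mysqldb env | grep MYSQL_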

  2. Start WildFly and deploy Java EE 7 application as:

    docker run -d --name mywildfly --link mysqldb:db -p 8080:8080 arungupta/wildfly-mysql-javaee7

    This uses the Docker Container Linking explained earlier.
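
    Because of the link alias db, the WildFly container sees the database through environment variables such as DB_PORT_3306_TCP_ADDR. A sketch of how to inspect them:

    docker exec mywildfly env | grep DB_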

  3. Check state of the cluster as:

    > docker info
    Containers: 7
    Images: 5
    Role: primary
    Strategy: spread
    Filters: affinity, health, constraint, port, dependency
    Nodes: 3
     swarm-master: 192.168.99.102:2376
      └ Containers: 2
      └ Reserved CPUs: 0 / 1
      └ Reserved Memory: 0 B / 1.022 GiB
      └ Labels: executiondriver=native-0.2, kernelversion=4.0.5-boot2docker, operatingsystem=Boot2Docker 1.7.0 (TCL 6.3); master : 7960f90 - Thu Jun 18 18:31:45 UTC 2015, provider=virtualbox, storagedriver=aufs
     swarm-node-01: 192.168.99.103:2376
      └ Containers: 2
      └ Reserved CPUs: 0 / 1
      └ Reserved Memory: 0 B / 1.022 GiB
      └ Labels: executiondriver=native-0.2, kernelversion=4.0.5-boot2docker, operatingsystem=Boot2Docker 1.7.0 (TCL 6.3); master : 7960f90 - Thu Jun 18 18:31:45 UTC 2015, provider=virtualbox, storagedriver=aufs
     swarm-node-02: 192.168.99.104:2376
      └ Containers: 3
      └ Reserved CPUs: 0 / 1
      └ Reserved Memory: 0 B / 1.022 GiB
      └ Labels: executiondriver=native-0.2, kernelversion=4.0.5-boot2docker, operatingsystem=Boot2Docker 1.7.0 (TCL 6.3); master : 7960f90 - Thu Jun 18 18:31:45 UTC 2015, provider=virtualbox, storagedriver=aufs
    CPUs: 3
    Total Memory: 3.065 GiB

    “swarm-node-02” is running three containers, so let’s look at the list of running containers:

    > eval "$(docker-machine env swarm-node-02)"
    > docker ps -a
    CONTAINER ID        IMAGE                             COMMAND                CREATED              STATUS              PORTS                    NAMES
    805f3587f5df        arungupta/wildfly-mysql-javaee7   "/opt/jboss/wildfly/   About a minute ago   Up About a minute   0.0.0.0:8080->8080/tcp   mywildfly
    ababc544df97        mysql                             "/entrypoint.sh mysq   5 minutes ago        Up 5 minutes        0.0.0.0:3306->3306/tcp   mysqldb
    45b015bc79f4        swarm:latest                      "/swarm join --addr    17 minutes ago       Up 17 minutes       2375/tcp                 swarm-agent
  4. Access the application as:

    curl http://$(docker-machine ip swarm-node-02):8080/employees/resources/employees

    to see the output as:

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?><collection><employee><id>1</id><name>Penny</name></employee><employee><id>2</id><name>Sheldon</name></employee><employee><id>3</id><name>Amy</name></employee><employee><id>4</id><name>Leonard</name></employee><employee><id>5</id><name>Bernadette</name></employee><employee><id>6</id><name>Raj</name></employee><employee><id>7</id><name>Howard</name></employee><employee><id>8</id><name>Priya</name></employee></collection>

12.4. Deploy Java EE Application to Docker Swarm Cluster using Docker Compose

Multiple Containers Using Docker Compose explains how multi-container applications can be easily started using Docker Compose.

  1. Connect to ‘swarm-node-02’:

    eval "$(docker-machine env swarm-node-02)"
  2. Stop the MySQL and WildFly containers:

    docker ps -a | grep wildfly | awk '{print $1}' | xargs docker rm -f
    docker ps -a | grep mysql | awk '{print $1}' | xargs docker rm -f
  3. Use the docker-compose.yml file explained in Multiple Containers Using Docker Compose to start the containers as:

    docker-compose up -d
    Creating wildflymysqljavaee7_mysqldb_1...
    Creating wildflymysqljavaee7_mywildfly_1...
  4. Check the containers running in the cluster as:

    eval "$(docker-machine env --swarm swarm-master)"
    docker info

    to see the output as:

    docker info
    Containers: 7
    Images: 5
    Role: primary
    Strategy: spread
    Filters: affinity, health, constraint, port, dependency
    Nodes: 3
     swarm-master: 192.168.99.102:2376
      └ Containers: 2
      └ Reserved CPUs: 0 / 1
      └ Reserved Memory: 0 B / 1.022 GiB
      └ Labels: executiondriver=native-0.2, kernelversion=4.0.5-boot2docker, operatingsystem=Boot2Docker 1.7.0 (TCL 6.3); master : 7960f90 - Thu Jun 18 18:31:45 UTC 2015, provider=virtualbox, storagedriver=aufs
     swarm-node-01: 192.168.99.103:2376
      └ Containers: 2
      └ Reserved CPUs: 0 / 1
      └ Reserved Memory: 0 B / 1.022 GiB
      └ Labels: executiondriver=native-0.2, kernelversion=4.0.5-boot2docker, operatingsystem=Boot2Docker 1.7.0 (TCL 6.3); master : 7960f90 - Thu Jun 18 18:31:45 UTC 2015, provider=virtualbox, storagedriver=aufs
     swarm-node-02: 192.168.99.104:2376
      └ Containers: 3
      └ Reserved CPUs: 0 / 1
      └ Reserved Memory: 0 B / 1.022 GiB
      └ Labels: executiondriver=native-0.2, kernelversion=4.0.5-boot2docker, operatingsystem=Boot2Docker 1.7.0 (TCL 6.3); master : 7960f90 - Thu Jun 18 18:31:45 UTC 2015, provider=virtualbox, storagedriver=aufs
    CPUs: 3
    Total Memory: 3.065 GiB
  5. Connect to ‘swarm-node-02’ again:

    eval "$(docker-machine env swarm-node-02)"

    and see the list of running containers as:

    docker ps -a
    CONTAINER ID        IMAGE                             COMMAND                CREATED             STATUS              PORTS                    NAMES
    b1e7d9bd2c09        arungupta/wildfly-mysql-javaee7   "/opt/jboss/wildfly/   38 seconds ago      Up 37 seconds       0.0.0.0:8080->8080/tcp   wildflymysqljavaee7_mywildfly_1
    ac9c967e4b1d        mysql:latest                      "/entrypoint.sh mysq   38 seconds ago      Up 38 seconds       3306/tcp                 wildflymysqljavaee7_mysqldb_1
    45b015bc79f4        swarm:latest                      "/swarm join --addr    20 minutes ago      Up 20 minutes       2375/tcp                 swarm-agent
  6. The application can then be accessed again using:

    curl http://$(docker-machine ip swarm-node-02):8080/employees/resources/employees

    and shows the output as:

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?><collection><employee><id>1</id><name>Penny</name></employee><employee><id>2</id><name>Sheldon</name></employee><employee><id>3</id><name>Amy</name></employee><employee><id>4</id><name>Leonard</name></employee><employee><id>5</id><name>Bernadette</name></employee><employee><id>6</id><name>Raj</name></employee><employee><id>7</id><name>Howard</name></employee><employee><id>8</id><name>Priya</name></employee></collection>

13. Java EE Application on Kubernetes Cluster

Kubernetes is an open source system for managing containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications.
— github.com/GoogleCloudPlatform/kubernetes/

Kubernetes, or “k8s” for short, allows the user to provide declarative primitives for the desired state, for example “need 5 WildFly servers and 1 MySQL server running”. Kubernetes self-healing mechanisms, such as auto-restarting, re-scheduling, and replicating containers, then ensure this state is met. The user just defines the state, and Kubernetes ensures that the state is met at all times on the cluster.

How is it related to Docker?

Docker provides the lifecycle management of containers. A Docker image defines a build time representation of the runtime containers. There are commands to start, stop, restart, link, and perform other lifecycle methods on these containers. Kubernetes uses Docker to package, instantiate, and run containerized applications.

How does Kubernetes simplify containerized application deployment?

A typical application has a cluster of containers across multiple hosts. For example, your web tier (say, Undertow) might run as a few instances, likely on a set of containers. Similarly, your application tier (say, WildFly) would run on a different set of containers, and the web tier would need to delegate requests to it. The web, application, and database tiers generally run on separate sets of containers, and these containers need to talk to each other. Using any of the solutions mentioned earlier would require scripting to start the containers, and monitoring/bouncing if something goes down. Kubernetes does all of that for the user after the application state has been defined.

13.1. Key Components

At a very high level, there are three key components:

  1. Pods are the smallest deployable units that can be created, scheduled, and managed. A pod is a logical collection of containers that belong to an application.

  2. Master is the central control point that provides a unified view of the cluster. There is a single master node that controls multiple worker nodes.

  3. Node (née minion) is a worker node that runs tasks as delegated by the master. Nodes can run one or more pods. A node provides an application-specific “virtual host” in a containerized environment.

A picture is always worth a thousand words and so this is a high-level logical block diagram for Kubernetes:

kubernetes key components
Figure 41. Kubernetes Key Components

After the 50,000-feet view, let’s fly a little lower, to 30,000 feet, and take a look at how Kubernetes makes all of this happen. A few key components at the Master and on each Node make this possible.

  1. Replication Controller is a resource at the Master that ensures that the requested number of pods are running on the nodes at all times.

  2. Service is an object on the master that provides load balancing across a replicated group of pods. A Label is an arbitrary key/value pair in a distributed, watchable storage that the Replication Controller uses for service discovery.

  3. Kubelet: Each node runs services to run containers and be managed from the master. In addition to Docker, Kubelet is another key service installed on each node. It reads container manifests as YAML files that describe a pod. Kubelet ensures that the containers defined in the pods are started and continue running.

  4. Master serves the RESTful Kubernetes API that validates and configures Pods, Services, and Replication Controllers.
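
Since the API is plain REST, it can also be exercised directly with curl. An illustrative example, using the master address and Vagrant credentials from this lab’s cluster:

curl -k -u vagrant:vagrant https://10.245.1.2/api/v1/nodes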

13.2. Start Kubernetes Cluster

A Kubernetes cluster can be easily started using Vagrant. There are two options to start the cluster: first, using a previously downloaded Kubernetes distribution bundle, and second, downloading the latest bundle as part of the install.

13.2.1. Using Previously Downloaded Kubernetes Distribution

  1. Set up a Kubernetes cluster as:

    cd kubernetes
    
    export KUBERNETES_PROVIDER=vagrant
    ./cluster/kube-up.sh

    The KUBERNETES_PROVIDER environment variable tells all of the various cluster management scripts which variant to use.

    Note
    This will take a few minutes, so be patient! Vagrant will provision each machine in the cluster with all the necessary components to run Kubernetes.

    It shows the output as:

    Starting cluster using provider: vagrant
    ... calling verify-prereqs
    ... calling kube-up
    Using credentials: vagrant:vagrant
    
    . . .
    
    Cluster validation succeeded
    Done, listing cluster services:
    
    Kubernetes master is running at https://10.245.1.2
    KubeDNS is running at https://10.245.1.2/api/v1/proxy/namespaces/kube-system/services/kube-dns
    KubeUI is running at https://10.245.1.2/api/v1/proxy/namespaces/kube-system/services/kube-ui

    Note down the address of the Kubernetes master, https://10.245.1.2 in this case.

13.2.2. Download and Start the Cluster Together

  1. Alternatively, the cluster can also be started as:

    > curl -sS https://get.k8s.io | bash
    Downloading kubernetes release v0.21.1 to /Users/arungupta/tools/kubernetes.tar.gz
    --2015-07-13 15:56:54--  https://storage.googleapis.com/kubernetes-release/release/v0.21.1/kubernetes.tar.gz
    Resolving storage.googleapis.com... 74.125.28.128, 2607:f8b0:400e:c02::80
    Connecting to storage.googleapis.com|74.125.28.128|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 117901998 (112M) [application/x-tar]
    Saving to: 'kubernetes.tar.gz'
    
    kubernetes.tar.gz               100%[=========================================================>] 112.44M  6.21MB/s   in 18s
    
    2015-07-13 15:57:13 (6.13 MB/s) - 'kubernetes.tar.gz' saved [117901998/117901998]
    
    . . .
    
    NAME                 STATUS    MESSAGE              ERROR
    controller-manager   Healthy   ok                   nil
    scheduler            Healthy   ok                   nil
    etcd-0               Healthy   {"health": "true"}   nil
    Cluster validation succeeded
    Done, listing cluster services:
    
    Kubernetes master is running at https://10.245.1.2
    KubeDNS is running at https://10.245.1.2/api/v1/proxy/namespaces/kube-system/services/kube-dns
    KubeUI is running at https://10.245.1.2/api/v1/proxy/namespaces/kube-system/services/kube-ui
    
    Kubernetes binaries at /Users/arungupta/tools/kubernetes/kubernetes/cluster/
    You may want to add this directory to your PATH in $HOME/.profile
    Installation successful!

13.2.3. Verify the Cluster

  1. Verify the Kubernetes cluster as:

    kubernetes> vagrant status
    Current machine states:
    
    master                    running (virtualbox)
    minion-1                  running (virtualbox)
    
    This environment represents multiple VMs. The VMs are all listed
    above with their current state. For more information about a specific
    VM, run `vagrant status NAME`.

    By default, the Vagrant setup will create a single master and one node. Each VM takes 1 GB of memory, so make sure you have at least 2 GB to 4 GB of free memory (plus appropriate free disk space).

    Note
    By default, only one node is created. This can be changed by setting the NUM_MINIONS environment variable to an integer before invoking the kube-up.sh script, as shown below.
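
    For example, two nodes can be requested as:

    export NUM_MINIONS=2
    ./cluster/kube-up.sh
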
    kubernetes cluster vagrant
    Figure 42. Kubernetes Cluster using Vagrant

    By default, each VM in the cluster runs Fedora, Kubelet is installed as a systemd service, and all other Kubernetes services run as containers on the master.

  2. Access https://10.245.1.2 (or whatever IP address is shown in your Kubernetes cluster startup log). This may present the warning shown below:

    kubernetes master default output certificate

    Click on ‘Advanced’, then on ‘Proceed to 10.245.1.2’, and enter the username ‘vagrant’ and password ‘vagrant’ to see the output as:

    kubernetes master default output
    Figure 43. Kubernetes Output from Master

    Check the list of nodes as:

    > ./cluster/kubectl.sh get nodes
    NAME         LABELS                              STATUS
    10.245.1.3   kubernetes.io/hostname=10.245.1.3   Ready
  3. Check the list of pods:

    kubernetes> ./cluster/kubectl.sh get po
    NAME      READY     STATUS    RESTARTS   AGE
  4. Check the list of services running:

    kubernetes> ./cluster/kubectl.sh get se
    NAME         LABELS                                    SELECTOR   IP(S)        PORT(S)
    kubernetes   component=apiserver,provider=kubernetes   <none>     10.247.0.1   443/TCP
  5. Check the list of replication controllers:

    kubernetes> ./cluster/kubectl.sh get rc
    CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR   REPLICAS

13.3. Deploy Java EE Application (multiple configuration files)

Pods, and the IP addresses assigned to them, are ephemeral. If a pod dies, Kubernetes will recreate it because of its self-healing features, but it might be recreated on a different host. Even on the same host, a different IP address could be assigned to it. So an application cannot rely upon the IP address of a pod.

A Kubernetes service is an abstraction which defines a logical set of pods. A service is typically backed by one or more physical pods (associated using labels), and it has a permanent IP address that can be used by other pods/applications. For example, a WildFly pod cannot reliably connect to a MySQL pod directly, but it can connect to the MySQL service. In essence, a Kubernetes service offers clients an IP and port pair which, when accessed, redirects to the appropriate backends.

kubernetes service
Figure 44. Kubernetes Service
Note
In this case, all the pods are running on a single node. This is because one node is the default for a Kubernetes cluster. A pod could very well be on another node if more nodes are configured to start in the cluster.

Any Service that a Pod wants to access must be created before the Pod itself, or else the environment variables will not be populated.

The order of the Service and its targeted Pods does not matter; however, the Service needs to be started before any other Pods consuming the Service are started.

13.3.1. Start MySQL Pod

  1. Start MySQL Pod:

    ./cluster/kubectl.sh create -f ../../attendees/kubernetes/app-mysql-pod.yaml
    pods/mysql-pod

    It uses the following configuration file:

    apiVersion: v1
    kind: Pod
    metadata:
      name: mysql-pod
      labels:
        name: mysql-pod
        context: docker-k8s-lab
    spec:
      containers:
        -
          name: mysql
          image: mysql:latest
          env:
            -
              name: "MYSQL_USER"
              value: "mysql"
            -
              name: "MYSQL_PASSWORD"
              value: "mysql"
            -
              name: "MYSQL_DATABASE"
              value: "sample"
            -
              name: "MYSQL_ROOT_PASSWORD"
              value: "supersecret"
          ports:
            -
              containerPort: 3306
  2. Get status of the Pod:

    kubernetes> ./cluster/kubectl.sh get -w po
    NAME        READY     STATUS    RESTARTS   AGE
    mysql-pod   0/1       Pending   0          4s
    NAME        READY     STATUS    RESTARTS   AGE
    mysql-pod   0/1       Running   0          44s
    mysql-pod   1/1       Running   0         44s

    -w watches for changes to the requested object. Wait for the MySQL pod to be in Running status.

13.3.2. Start MySQL service

  1. Start MySQL Service:

    ./cluster/kubectl.sh create -f ../../attendees/kubernetes/app-mysql-service.yaml
    services/mysql-service

    It uses the following configuration file:

    apiVersion: v1
    kind: Service
    metadata:
      name: mysql-service
      labels:
        name: mysql-pod
        context: docker-k8s-lab
    spec:
      ports:
        # the port that this service should serve on
        - port: 3306
      # label keys and values that must match in order to receive traffic for this service
      selector:
        name: mysql-pod
        context: docker-k8s-lab

    Once again, the label “context: docker-k8s-lab” is used. This simplifies querying the created pods later on.

  2. Get status of the Service:

    ./cluster/kubectl.sh get -w se
    NAME            LABELS                                    SELECTOR                                IP(S)          PORT(S)
    kubernetes      component=apiserver,provider=kubernetes   <none>                                  10.247.0.1     443/TCP
    mysql-service   context=docker-k8s-lab,name=mysql-pod     context=docker-k8s-lab,name=mysql-pod   10.247.63.43   3306/TCP

    If multiple services are running, the list can be narrowed by specifying labels:

    ./cluster/kubectl.sh  get -w po -l context=docker-k8s-lab,name=mysql-pod
    NAME        READY     STATUS    RESTARTS   AGE
    mysql-pod   1/1       Running   0          4m

    This is also the selector label used by Service to target Pods.

    When a Service is run on a node, the kubelet adds a set of environment variables for each active Service. It supports both Docker-links-compatible variables and the simpler {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT variables, where the Service name is upper-cased and dashes are converted to underscores.

    Our service name is “mysql-service”, so the MYSQL_SERVICE_SERVICE_HOST and MYSQL_SERVICE_SERVICE_PORT variables are available to other pods.
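
    A sketch of how a datasource definition could consume these variables in standalone.xml (not necessarily the actual contents of the arungupta/wildfly-mysql-javaee7:k8s image):

    <connection-url>jdbc:mysql://${env.MYSQL_SERVICE_SERVICE_HOST}:${env.MYSQL_SERVICE_SERVICE_PORT}/sample</connection-url>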

Kubernetes also allows services to be resolved using DNS. Send a Pull Request adding this functionality to the lab, as explained in #62.

13.3.3. Start WildFly Replication Controller

  1. Start WildFly replication controller:

    ./cluster/kubectl.sh create -f ../../attendees/kubernetes/app-wildfly-rc.yaml
    replicationcontrollers/wildfly-rc

    It uses the following configuration file:

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: wildfly-rc
      labels:
        name: wildfly
        context: docker-k8s-lab
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            name: wildfly
        spec:
          containers:
          - name: wildfly-rc-pod
            image: arungupta/wildfly-mysql-javaee7:k8s
            ports:
            - containerPort: 8080
  2. Check status of the Pod inside Replication Controller:

    ./cluster/kubectl.sh get po
    NAME               READY     STATUS    RESTARTS   AGE
    mysql-pod          1/1       Running   0          1h
    wildfly-rc-w2kk5   1/1       Running   0          6m
  3. Get IP address of the Pod:

    ./cluster/kubectl.sh get -o template po wildfly-rc-w2kk5 --template={{.status.podIP}}
    10.246.1.23

13.3.4. Access the application (using node)

  1. Log in to node:

    vagrant ssh minion-1
  2. Access the application using curl http://10.246.1.23:8080/employees/resources/employees/ and replace IP address with the one obtained earlier:

    Last login: Thu Jul 16 00:24:36 2015 from 10.0.2.2
    [vagrant@kubernetes-minion-1 ~]$ curl http://10.246.1.23:8080/employees/resources/employees/
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?><collection><employee><id>1</id><name>Penny</name></employee><employee><id>2</id><name>Sheldon</name></employee><employee><id>3</id><name>Amy</name></employee><employee><id>4</id><name>Leonard</name></employee><employee><id>5</id><name>Bernadette</name></employee><employee><id>6</id><name>Raj</name></employee><employee><id>7</id><name>Howard</name></employee><employee><id>8</id><name>Priya</name></employee></collection>

13.3.5. Access the application (using proxy)
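
This section is a sketch. Assuming the master’s API proxy follows the same URL pattern as the KubeDNS and KubeUI endpoints shown in the cluster startup output, the application could be reached without logging in to the node (the pod name is from the earlier steps):

curl -k -u vagrant:vagrant https://10.245.1.2/api/v1/proxy/namespaces/default/pods/wildfly-rc-w2kk5/employees/resources/employees/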

13.4. Deploy Java EE Application (one configuration file)

Kubernetes allows multiple resources to be specified in a single configuration file. This makes it easy to create a “Kubernetes Application” that consists of multiple resources.

The previous section showed how to deploy the Java EE application using multiple configuration files. This application can be deployed using a single configuration file as well.

  1. Start the application using the configuration file:

    apiVersion: v1
    kind: Pod
    metadata:
      name: mysql-pod
      labels:
        name: mysql-pod
        context: docker-k8s-lab
    spec:
      containers:
        -
          name: mysql
          image: mysql:latest
          env:
            -
              name: "MYSQL_USER"
              value: "mysql"
            -
              name: "MYSQL_PASSWORD"
              value: "mysql"
            -
              name: "MYSQL_DATABASE"
              value: "sample"
            -
              name: "MYSQL_ROOT_PASSWORD"
              value: "supersecret"
          ports:
            -
              containerPort: 3306
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: mysql-service
      labels:
        name: mysql-pod
        context: docker-k8s-lab
    spec:
      ports:
        # the port that this service should serve on
        - port: 3306
      # label keys and values that must match in order to receive traffic for this service
      selector:
        name: mysql-pod
        context: docker-k8s-lab
    ---
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: wildfly-rc
      labels:
        name: wildfly
        context: docker-k8s-lab
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            name: wildfly
        spec:
          containers:
          - name: wildfly-rc-pod
            image: arungupta/wildfly-mysql-javaee7:k8s
            ports:
            - containerPort: 8080

    Notice that each section, one each for the MySQL Pod, MySQL Service, and WildFly Replication Controller, is separated by ---.

  2. Start the application:

    ./cluster/kubectl.sh create -f ../../attendees/kubernetes/app.yaml
    pods/mysql-pod
    services/mysql-service
    replicationcontrollers/wildfly-rc
  3. Access the application using Access the application (using node) or Access the application (using proxy).

13.5. Rescheduling Pods

Replication Controller ensures that the specified number of pod “replicas” are running at any one time. If there are too many, the replication controller kills some pods; if there are too few, it starts more.

The WildFly Replication Controller is already running with one Pod. Let’s delete this Pod and see how a new Pod is automatically rescheduled.

  1. Find the Pod’s name:

    ./cluster/kubectl.sh get po
    NAME               READY     STATUS    RESTARTS   AGE
    wildfly-rc-w2kk5   1/1       Running   0          6m
  2. Delete the Pod:

    ./cluster/kubectl.sh delete po wildfly-rc-w2kk5
    pods/wildfly-rc-w2kk5

    Status of the Pods can be seen in another shell:

    ./cluster/kubectl.sh get -w po
    NAME               READY     STATUS    RESTARTS   AGE
    wildfly-rc-w2kk5   1/1       Running   0          2m
    NAME               READY     STATUS    RESTARTS   AGE
    wildfly-rc-xz6wu   0/1       Pending   0         2s
    wildfly-rc-xz6wu   0/1       Pending   0         2s
    wildfly-rc-xz6wu   0/1       Pending   0         12s
    wildfly-rc-xz6wu   0/1       Running   0         14s
    wildfly-rc-xz6wu   1/1       Running   0         22s

    Notice how the Pod named “wildfly-rc-w2kk5” was deleted and a new Pod named “wildfly-rc-xz6wu” was created.

13.6. Scaling Pods

Replication Controller allows dynamic scaling up and down of Pods.

  1. Scale up the number of Pods:

    ./cluster/kubectl.sh scale --replicas=2 rc wildfly-rc
    scaled
  2. Status of the Pods can be seen in another shell:

    ./cluster/kubectl.sh get -w po
    NAME               READY     STATUS    RESTARTS   AGE
    wildfly-rc-bgtkg   1/1       Running   0          3m
    NAME               READY     STATUS    RESTARTS   AGE
    wildfly-rc-bymu7   0/1       Pending   0          2s
    wildfly-rc-bymu7   0/1       Pending   0         2s
    wildfly-rc-bymu7   0/1       Pending   0         2s
    wildfly-rc-bymu7   0/1       Running   0         3s
    wildfly-rc-bymu7   1/1       Running   0         12s

    Notice a new Pod with the name “wildfly-rc-bymu7” is created.

  3. Scale down the number of Pods:

    ./cluster/kubectl.sh scale --replicas=1 rc wildfly-rc
    scaled
  4. The status of the Pods using -w is not shown correctly (#11338), but the status of the Pods can be seen correctly as:

    ./cluster/kubectl.sh get po
    NAME               READY     STATUS    RESTARTS   AGE
    wildfly-rc-bgtkg   1/1       Running   0          9m

    Notice only one Pod is running now.

13.7. Application Logs

  1. Get a list of the Pods:

    ./cluster/kubectl.sh get po
    NAME               READY     STATUS    RESTARTS   AGE
    mysql-pod          1/1       Running   0          18h
    wildfly-rc-w2kk5   1/1       Running   0          16h
  2. Get logs for the WildFly Pod:

    ./cluster/kubectl.sh logs wildfly-rc-w2kk5
    => Starting WildFly server
    => Waiting for the server to boot
    =========================================================================
    
      JBoss Bootstrap Environment
    
      JBOSS_HOME: /opt/jboss/wildfly
    
      . . .

Logs can be obtained for any Kubernetes resource this way. Alternatively, the logs can also be seen by logging into the node:

  1. Log in to the node VM:

    > vagrant ssh minion-1
    Last login: Fri Jun  5 23:01:36 2015 from 10.0.2.2
    [vagrant@kubernetes-minion-1 ~]$
  2. Log in as root:

    [vagrant@kubernetes-minion-1 ~]$ su -
    Password:
    [root@kubernetes-minion-1 ~]#

    The default root password for VM images created by Vagrant is ‘vagrant’.

  3. See the list of Docker containers running on this VM:

    docker ps
  4. View WildFly log as:

    docker logs $(docker ps | grep arungupta/wildfly | awk '{print $1}')
  5. View MySQL log as:

    docker logs <CID>
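
    The <CID> can be looked up with a one-liner analogous to the WildFly example. A sketch, assuming the container was started from the mysql image:

    docker logs $(docker ps | grep mysql | awk '{print $1}')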

13.8. Delete Kubernetes Resources

Individual resources (a service, replication controller, or pod) can be deleted by using the delete command instead of the create command. Alternatively, all services and pods carrying a given label can be deleted as:

kubectl delete se,po -l context=docker-k8s-lab
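
Individual resources can also be removed by mirroring the earlier create commands. An illustrative example using this lab’s configuration files:

./cluster/kubectl.sh delete -f ../../attendees/kubernetes/app-wildfly-rc.yaml
./cluster/kubectl.sh delete -f ../../attendees/kubernetes/app-mysql-service.yaml
./cluster/kubectl.sh delete -f ../../attendees/kubernetes/app-mysql-pod.yaml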

13.9. Stop Kubernetes Cluster

> ./cluster/kube-down.sh
Bringing down cluster using provider: vagrant
==> minion-1: Forcing shutdown of VM...
==> minion-1: Destroying VM and associated drives...
==> master: Forcing shutdown of VM...
==> master: Destroying VM and associated drives...
Done

13.10. Debug Kubernetes Master

  1. Log in to the master as:

    vagrant ssh master
    Last login: Wed Jul 15 20:36:32 2015 from 10.0.2.2
    [vagrant@kubernetes-master ~]$
  2. Log in as root:

    [vagrant@kubernetes-master ~]$ su -
    Password:
    [root@kubernetes-master ~]#

    The default root password for VM images created by Vagrant is ‘vagrant’.

  3. Check the containers running on master:

    docker ps
    CONTAINER ID        IMAGE                                                                               COMMAND                CREATED             STATUS              PORTS               NAMES
    dc59a764953c        gcr.io/google_containers/etcd:2.0.12                                                "/bin/sh -c '/usr/lo   20 hours ago        Up 20 hours                             k8s_etcd-container.fa2ab1d9_etcd-server-kubernetes-master_default_7b64ecafde589b94a342982699601a19_2b69c4d5
    b722e22d3ddb        gcr.io/google_containers/kube-scheduler:d1107ff3b8fcdcbf5a9d78d9d6dbafb1            "/bin/sh -c '/usr/lo   20 hours ago        Up 20 hours                             k8s_kube-scheduler.7501c229_kube-scheduler-kubernetes-master_default_98b354f725c1589ea5a12119795546ae_b81b9740
    38a73e342866        gcr.io/google_containers/kube-controller-manager:fafaf8100ccc963e643b55e35386d713   "/bin/sh -c '/usr/lo   20 hours ago        Up 20 hours                             k8s_kube-controller-manager.db050993_kube-controller-manager-kubernetes-master_default_f5c25224fbfb2de87e1e5c35e6b3a293_dcd4cb5d
    01001de6409e        gcr.io/google_containers/kube-apiserver:cff9e185796caa8b281e7d961aea828b            "/bin/sh -c '/usr/lo   20 hours ago        Up 20 hours                             k8s_kube-apiserver.7e06f4e1_kube-apiserver-kubernetes-master_default_829f8c23fd5fc7951253cac7618447fc_b39c0a5d
    0f8ccb144ece        gcr.io/google_containers/pause:0.8.0                                                "/pause"               20 hours ago        Up 20 hours                             k8s_POD.e4cc795_kube-scheduler-kubernetes-master_default_98b354f725c1589ea5a12119795546ae_eb1efcac
    0b8f527456c0        gcr.io/google_containers/pause:0.8.0                                                "/pause"               20 hours ago        Up 20 hours                             k8s_POD.e4cc795_kube-apiserver-kubernetes-master_default_829f8c23fd5fc7951253cac7618447fc_5dd4dee7
    39d9c41ab1a2        gcr.io/google_containers/pause:0.8.0                                                "/pause"               20 hours ago        Up 20 hours                             k8s_POD.e4cc795_kube-controller-manager-kubernetes-master_default_f5c25224fbfb2de87e1e5c35e6b3a293_522972ae
    d970ddff7046        gcr.io/google_containers/pause:0.8.0                                                "/pause"               20 hours ago        Up 20 hours                             k8s_POD.e4cc795_etcd-server-kubernetes-master_default_7b64ecafde589b94a342982699601a19_fa75b27f

14. Common Docker Commands

Here is a list of commonly used Docker commands, grouped by purpose:

Image

  Build an image: docker build --rm=true .
  Install an image: docker pull ${IMAGE}
  List installed images: docker images
  List installed images (detailed listing): docker images --no-trunc
  Remove an image: docker rmi ${IMAGE_ID}
  Remove all untagged images: docker rmi $(docker images | grep "^<none>" | awk '{print $3}')
  Remove all images: docker rmi $(docker images -q)
  Remove dangling images: docker rmi $(docker images --quiet --filter "dangling=true")

Containers

  Run a container: docker run
  List running containers: docker ps
  List all containers: docker ps -a
  Stop a container: docker stop ${CID}
  Stop all running containers: docker stop $(docker ps -q)
  List all exited containers with status 1: docker ps -a --filter "exited=1"
  Remove a container: docker rm ${CID}
  Remove containers matching a regular expression: docker ps -a | grep wildfly | awk '{print $1}' | xargs docker rm -f
  Remove all exited containers: docker rm -f $(docker ps -a | grep Exit | awk '{ print $1 }')
  Remove all containers: docker rm $(docker ps -aq)
  Find the IP address of a container: docker inspect --format '{{ .NetworkSettings.IPAddress }}' ${CID}
  Attach to a container: docker attach ${CID}
  Open a shell into a container: docker exec -it ${CID} bash
  Get the container id for an image by a regular expression: docker ps | grep wildfly | awk '{print $1}'

15. Troubleshooting

15.1. Network Timed Out

Depending upon the network speed and restrictions, you may not be able to download Docker images from Docker Hub. The error message may look like:

$ docker pull arungupta/wildfly-mysql-javaee7
Using default tag: latest
Pulling repository docker.io/arungupta/wildfly-mysql-javaee7
Network timed out while trying to connect to https://index.docker.io/v1/repositories/arungupta/wildfly-mysql-javaee7/images. You may want to check your internet connection or if you are behind a proxy.

This section provides a couple of alternatives to resolve this.

15.1.1. Restart Docker Machine

It seems like Docker Machine gets into a strange state and restarting it fixes that.

docker-machine restart <MACHINE_NAME>
eval $(docker-machine env <MACHINE_NAME>)

15.1.2. Loading Images Offline

Images can be loaded from a previously saved .tar file. All images required for this workshop can be downloaded from:

Load the tar file:

docker load -i <path to image tar file>

For example:

docker load -i arungupta-javaee7-hol.tar

Now docker images should show the image.
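
For reference, such a .tar file is typically produced with docker save. An illustrative example using an image name from this lab:

docker save -o arungupta-javaee7-hol.tar arungupta/javaee7-hol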

15.2. Cannot create Docker Machine on Windows

Are you not able to create a Docker Machine on Windows?

Try starting cmd with Administrator privileges and then running the command again.

15.3. No route to host

Accessing the WildFly and MySQL sample in Kubernetes gives a 404 when you run curl http://10.246.1.23:8080/employees/resources/employees/.

This may be resolved by stopping the node and restarting the cluster again:

vagrant halt minion-1
./cluster/kube-up.sh

These commands need to be run in the ‘kubernetes’ directory.

16. References